Improving the performance of end-to-end ASR models on long utterances ranging from minutes to hours in length is an ongoing challenge in speech recognition. A common solution is to segment the audio in advance with a separate voice activity detector (VAD) that decides segment boundary locations based purely on acoustic speech/non-speech information. VAD segmenters, however, may be sub-optimal for real-world speech where, for example, a complete sentence that should be treated as a whole may contain hesitations ("set an alarm for... 5 o'clock"). We propose to replace the VAD with an end-to-end ASR model capable of predicting segment boundaries in a streaming fashion, allowing the segmentation decision to be conditioned not only on better acoustic features but also on semantic features from the decoded text, with negligible extra computation. In experiments on real-world long-form audio (YouTube) of up to 30 minutes in length, we demonstrate an 8.5% relative WER improvement and a 250 ms reduction in median segment latency compared to the VAD segmenter baseline on a state-of-the-art Conformer RNN-T model.
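A minimal sketch of the boundary-token idea described above, assuming the streaming decoder's vocabulary is augmented with a special end-of-segment token whose emission closes the current segment. EOS_SEG, the stub posterior function, and the periodic boundary cue are illustrative stand-ins, not the paper's Conformer RNN-T:

```python
import numpy as np

EOS_SEG = 0  # hypothetical id of the end-of-segment token

def decoder_posteriors(frame_idx, rng):
    """Stand-in for the streaming decoder's per-frame token distribution."""
    logits = rng.normal(size=16)
    # Fake acoustic/semantic boundary cue once every 50 frames.
    logits[EOS_SEG] += 3.0 if frame_idx % 50 == 49 else -5.0
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def stream_segments(num_frames=200, seed=0):
    rng = np.random.default_rng(seed)
    segment, segments = [], []
    for t in range(num_frames):
        token = int(np.argmax(decoder_posteriors(t, rng)))
        if token == EOS_SEG and segment:   # boundary decided on the fly
            segments.append(segment)
            segment = []
        elif token != EOS_SEG:
            segment.append(token)
    if segment:
        segments.append(segment)
    return segments

print([len(s) for s in stream_segments()])  # segment lengths in frames
```

Because the boundary decision reuses the decoder's own output distribution, it sees the decoded text (semantics) for free, which is the claimed advantage over a purely acoustic VAD.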
Lung segmentation in chest X-rays (CXRs) is an important prerequisite for improving the specificity of diagnoses of cardiopulmonary diseases in a clinical decision support system. Current deep learning (DL) models for lung segmentation are trained and evaluated on CXR datasets in which the radiographic projections are captured predominantly from the adult population. However, the shape of the lungs is reported to be significantly different in pediatric patients across the developmental stages from infancy to adulthood. This might result in age-related data domain shifts that would adversely impact lung segmentation performance when models trained on the adult population are deployed for pediatric lung segmentation. In this work, our goal is to analyze the generalizability of deep adult lung segmentation models to the pediatric population and improve performance through a systematic combinatorial approach consisting of CXR modality-specific weight initializations, stacked generalization, and an ensemble of the stacked generalization models. Novel evaluation metrics consisting of Mean Lung Contour Distance and Average Hash Score are proposed in addition to the Multi-scale Structural Similarity Index Measure, Intersection over Union, and Dice metrics to evaluate segmentation performance. We observed a significant improvement (p < 0.05) in cross-domain generalization through our combinatorial approach. This study could serve as a paradigm to analyze the cross-domain generalizability of deep segmentation models for other medical imaging modalities and applications.
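For reference, a sketch of the two standard overlap metrics named above, computed on binary masks; the proposed Mean Lung Contour Distance and Average Hash Score are not reproduced here since the abstract does not define them:

```python
import numpy as np

def dice_and_iou(pred, gt, eps=1e-8):
    """Dice coefficient and Intersection over Union for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + eps)
    iou = inter / (union + eps)
    return dice, iou

pred = np.zeros((8, 8)); pred[2:6, 2:6] = 1   # toy prediction
gt = np.zeros((8, 8)); gt[3:7, 3:7] = 1       # toy ground truth
print(dice_and_iou(pred, gt))                 # (0.5625, ~0.391)
```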
Automated segmentation of tuberculosis (TB)-consistent lesions in chest X-rays (CXRs) using deep learning (DL) methods can help reduce radiologist effort, supplement clinical decision-making, and potentially improve patient treatment. Most works in the literature discuss training automatic segmentation models using coarse bounding box annotations. However, the granularity of the bounding box annotation could result in the inclusion of a considerable fraction of false positives and false negatives at the pixel level, which may adversely impact overall semantic segmentation performance. This study (i) evaluates the benefits of using fine-grained annotations of TB-consistent lesions and (ii) trains and constructs ensembles of U-Net model variants for segmenting TB-consistent lesions in CXRs. We evaluated segmentation performance using several ensemble methods such as bitwise-AND, bitwise-OR, bitwise-MAX, and stacking. We observed that the stacking ensemble demonstrated superior segmentation performance (Dice score: 0.5743, 95% confidence interval: (0.4055, 0.7431)) compared to the individual constituent models and other ensemble methods. To the best of our knowledge, this is the first study to apply ensemble learning to improve fine-grained TB-consistent lesion segmentation performance.
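A toy illustration of the bitwise combination rules named above, operating on sigmoid probability maps from the constituent models; the 0.5 threshold and random maps are placeholders, and the comment indicates where stacking would differ:

```python
import numpy as np

rng = np.random.default_rng(0)
probs = rng.random((3, 4, 4))                 # stand-in for three U-Net variants
masks = probs > 0.5

and_mask = np.logical_and.reduce(masks)       # bitwise-AND: keep agreement only
or_mask = np.logical_or.reduce(masks)         # bitwise-OR: keep any detection
max_mask = probs.max(axis=0) > 0.5            # bitwise-MAX on probabilities

# Stacking instead trains a meta-learner on the constituent models' outputs,
# e.g. a per-pixel classifier whose features are the stacked probabilities:
stack_features = probs.reshape(3, -1).T       # (pixels, models) design matrix
print(and_mask.sum(), or_mask.sum(), max_mask.sum(), stack_features.shape)
```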
Climate change poses new challenges for crop-related concerns, including food insecurity, supply stability, and economic planning. As one of the central challenges, crop yield prediction has become a pressing task in the machine learning field. Despite its importance, the prediction task is exceptionally complicated, since crop yield depends on various factors such as weather, land surface, and soil quality, as well as their interactions. In recent years, machine learning models have been successfully applied in this domain. However, these models either restrict their tasks to a relatively small region or study only a single year or a few years, which makes them hard to generalize spatially and temporally. In this paper, we introduce a novel graph-based recurrent neural network for crop yield prediction that incorporates both geographical and temporal knowledge into the model to further boost predictive power. Our method is trained, validated, and tested on over 2,000 counties from 41 states in the US mainland, covering the years 1981 to 2019. To our knowledge, this is the first machine learning method that embeds geographical knowledge in crop yield prediction and predicts crop yields at the county level nationwide. We also lay a solid foundation for comparison with other machine learning baselines by applying well-known linear models, tree-based models, and deep learning methods and comparing their performance. Experiments show that our proposed method consistently outperforms existing state-of-the-art methods on various metrics, validating the effectiveness of geospatial and temporal information.
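A minimal sketch of the graph-plus-recurrence idea: one round of neighbor averaging (a simple GCN-style step) over a county adjacency matrix per year, followed by an LSTM over the yearly county embeddings. The feature sizes, self-loop adjacency, and layer choices are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class GNNRNN(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gnn = nn.Linear(in_dim, hid_dim)
        self.rnn = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, 1)

    def forward(self, x, adj):
        # x: (counties, years, features); adj: row-normalized (counties, counties)
        h = torch.relu(self.gnn(torch.einsum('cd,dyf->cyf', adj, x)))
        out, _ = self.rnn(h)             # recurrence over the year axis
        return self.head(out[:, -1])     # next-year yield per county

counties, years, feats = 5, 10, 8
adj = torch.eye(counties)                # stand-in adjacency (self-loops only)
model = GNNRNN(feats, 16)
print(model(torch.randn(counties, years, feats), adj).shape)  # (5, 1)
```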
Histopathological analysis is the present gold standard for precancerous lesion diagnosis. The goal of automated histopathological classification from digital images requires supervised training, which demands a large number of expert annotations that can be expensive and time-consuming to collect. Meanwhile, accurate classification of image patches cropped from whole-slide images is essential for standard sliding-window based histopathology slide classification methods. To mitigate these issues, we propose a carefully designed conditional GAN model, namely HistoGAN, for synthesizing realistic histopathology image patches conditioned on class labels. We also investigate a novel synthetic augmentation framework that selectively adds new synthetic image patches generated by our proposed HistoGAN, rather than expanding the training set with synthetic images directly. By selecting synthetic images based on the confidence of their assigned labels and their feature similarity to real labeled images, our framework provides quality assurance for synthetic augmentation. Our models are evaluated on two datasets: a cervical histopathology image dataset with limited annotations, and another dataset of lymph node histopathology images with metastatic cancer. Here, we show that leveraging HistoGAN-generated images with selective augmentation results in significant and consistent improvements in classification performance (6.7% and 2.8% higher accuracy) on the cervical histopathology and metastatic cancer datasets, respectively.
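A sketch of the selective-augmentation filter described above: a synthetic patch is kept only if a classifier is confident about its conditioned label and its feature vector lies close to the real patches of that label. The cosine-to-centroid criterion and both thresholds are illustrative assumptions, not the paper's exact selection rule:

```python
import numpy as np

def select_synthetic(feats_syn, conf_syn, labels_syn, feats_real, labels_real,
                     conf_thresh=0.9, sim_thresh=0.8):
    keep = []
    for i, (f, c, y) in enumerate(zip(feats_syn, conf_syn, labels_syn)):
        centroid = feats_real[labels_real == y].mean(axis=0)
        sim = f @ centroid / (np.linalg.norm(f) * np.linalg.norm(centroid) + 1e-8)
        if c >= conf_thresh and sim >= sim_thresh:
            keep.append(i)   # admit this synthetic patch into the training set
    return keep

rng = np.random.default_rng(1)
keep = select_synthetic(rng.random((10, 4)), rng.random(10),
                        rng.integers(0, 2, 10), rng.random((20, 4)),
                        rng.integers(0, 2, 20))
print(keep)  # indices of synthetic patches that passed both checks
```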
Next-basket recommendation considers recommending a set of items into the next basket that a user will purchase as a whole. In this paper, we develop a novel mixed model with preferences, popularities, and transitions (M2) for next-basket recommendation. This method models three important factors in the next-basket generation process: 1) users' general preferences over items, 2) items' global popularities, and 3) transition patterns among items. Unlike existing recurrent neural network-based approaches, M2 does not use complicated networks to model the transitions among items, nor does it generate embeddings for users. Instead, it uses a simple encoder-decoder based approach (ED-Trans) to better model the transition patterns among items. We compared M2 with different combinations of these factors against 5 state-of-the-art next-basket recommendation methods on 4 public benchmark datasets in recommending the first, second, and third next baskets. Our experimental results demonstrate that M2 significantly outperforms the state-of-the-art methods on all the datasets in all the tasks, with an improvement of up to 22.1%. In addition, our ablation study demonstrates that ED-Trans is more effective than recurrent neural networks in terms of recommendation performance. We also provide a thorough discussion of various experimental protocols and evaluation metrics for next-basket recommendation evaluation.
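A toy sketch of the three-factor mixing idea: score each item as a weighted blend of the user's general preference, global item popularity, and a transition score derived from the previous basket. The weights and the simple additive transition are illustrative assumptions, not M2's exact formulation:

```python
import numpy as np

def next_basket_scores(user_pref, item_pop, trans, prev_basket,
                       alpha=0.5, beta=0.2, gamma=0.3):
    # trans[i, j]: tendency of item j to follow item i across baskets
    trans_score = trans[prev_basket].mean(axis=0)
    return alpha * user_pref + beta * item_pop + gamma * trans_score

n_items = 6
rng = np.random.default_rng(1)
scores = next_basket_scores(rng.random(n_items), rng.random(n_items),
                            rng.random((n_items, n_items)), [0, 2])
print(np.argsort(-scores)[:3])  # top-3 candidate items for the next basket
```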
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change the way we do this work. We first discuss how AI can be used to enhance the results of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors, and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) to segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve the state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
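A minimal sketch of one way text embeddings can drive a universal segmentation head: a fixed embedding per class name is projected into a per-class filter applied to the decoder features, so adding a class means adding a text embedding rather than retraining a fixed output layer. The random embeddings and the dot-product head below are stand-ins, not the paper's CLIP encoder or controller design:

```python
import torch
import torch.nn as nn

class TextConditionedHead(nn.Module):
    def __init__(self, feat_dim=32, text_dim=512):
        super().__init__()
        self.proj = nn.Linear(text_dim, feat_dim)  # text embedding -> class filter

    def forward(self, feats, text_emb):
        # feats: (B, C, D, H, W); text_emb: (num_classes, text_dim)
        filters = self.proj(text_emb)                        # (K, C)
        logits = torch.einsum('bcdhw,kc->bkdhw', feats, filters)
        return torch.sigmoid(logits)  # one binary mask per class

head = TextConditionedHead()
masks = head(torch.randn(1, 32, 4, 8, 8), torch.randn(25, 512))
print(masks.shape)  # (1, 25, 4, 8, 8): one mask per organ class
```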
Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative, whose goal is to preserve invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not contain enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate the locality problem of comparative SSL, we propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics. We also address the preservation of scale information, a powerful tool for aiding image understanding that has not drawn much attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations.
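A sketch of the multi-task objective on the feature pyramid, assuming each scale contributes a pixel-restoration term plus a siamese feature-comparison term; the negative-cosine comparison and uniform weighting are assumptions, not PCRLv2's exact losses:

```python
import torch
import torch.nn.functional as F

def pyramid_ssl_loss(recons, targets, feats_a, feats_b, w=1.0):
    loss = 0.0
    for r, t, fa, fb in zip(recons, targets, feats_a, feats_b):
        restore = F.mse_loss(r, t)                     # pixel restoration
        compare = -F.cosine_similarity(
            fa.flatten(1), fb.flatten(1)).mean()       # siamese comparison
        loss = loss + restore + w * compare
    return loss

scales = [(1, 1, 16, 16), (1, 1, 8, 8)]                # two pyramid levels
recons = [torch.randn(s) for s in scales]
targets = [torch.randn(s) for s in scales]
fa = [torch.randn(1, 8) for _ in scales]
fb = [torch.randn(1, 8) for _ in scales]
print(pyramid_ssl_loss(recons, targets, fa, fb))
```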
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
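A toy sketch of the iterative parallel decoding that makes masked-token generators faster than autoregressive sampling: start from an all-masked token grid, predict every position in parallel at each step, and keep only the most confident predictions while re-masking the rest. The confidence-based unmasking schedule and the random stand-in model are illustrative assumptions, not Muse's actual sampler:

```python
import torch

MASK = -1  # sentinel for a still-masked position

def parallel_decode(model, num_tokens=64, vocab=1024, steps=8):
    tokens = torch.full((num_tokens,), MASK)
    for step in range(steps):
        logits = model(tokens)                      # (num_tokens, vocab)
        probs, preds = logits.softmax(-1).max(-1)
        still_masked = tokens == MASK
        # unmask a growing fraction of the most confident masked positions
        k = int(num_tokens * (step + 1) / steps) - int((~still_masked).sum())
        if k > 0:
            conf = probs.masked_fill(~still_masked, -1.0)
            idx = conf.topk(k).indices
            tokens[idx] = preds[idx]
    return tokens

fake_model = lambda toks: torch.randn(toks.numel(), 1024)  # random stand-in
print(parallel_decode(fake_model)[:10])
```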